#continuous validation
xlmcv · 7 years ago
xLM for DevOps - Introducing vCD (Validated Continuous Delivery)
INTRODUCTION
The main objective of this post is to answer the following questions.  I know a picture is worth a thousand words, so I will do this with illustrations.
Can "Validation" and "DevOps" be used in the same sentence (like vCD - Validated Continuous Delivery)?
Is DevOps a viable model for validated apps in GxP Environments?
How can one ensure the "validation fidelity" in a DevOps environment?
First, let us look at some definitions:
DevOps (a clipped compound of "development" and "operations") is a software engineering culture and practice that aims at unifying software development (Dev) and software operation (Ops). The main characteristic of the DevOps movement is to strongly advocate automation and monitoring at all steps of software construction, from integration, testing, releasing to deployment and infrastructure management. DevOps aims at shorter development cycles, increased deployment frequency, and more dependable releases, in close alignment with business objectives. (Source:  Wikipedia)
Validated Continuous Delivery (vCD) is a software engineering culture and practice which aims at shorter development and validation cycles and increased deployment frequency of "compliant" releases, in close alignment with business and regulatory objectives.  (Source: xLM)
MODEL SUMMARY
In order to answer the three questions above, I will present a simple working model.  This model includes the following steps:
Code Development - Code is developed using an Agile Model and committed to the Repository.
Continuous Integration (CI) - The CI Server builds the release.
Code Deployment - The Build is deployed to the Test Cluster.
Build Validation - The Build is validated by Continuous Validation (CV) and Executed Protocols are posted for QA Review.
QA Review - QA reviews the executed protocol and approves the results.
Summary Report - A Summary Report is automatically generated that captures all the steps in the Pipeline with detailed evidence.
Deployment to Production - This activity can be scheduled and managed by the Deployment Team.
The pipeline implementing these six steps was built in AWS CodePipeline.
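As a rough, hedged sketch (not the actual configuration from this use case), the state of such a pipeline can be polled with boto3, the AWS SDK for Python; the pipeline name below is a placeholder:

```python
# Minimal sketch: report the latest status of each stage in a CodePipeline
# pipeline. "vcd-demo-pipeline" is a hypothetical placeholder name.
import boto3

def report_pipeline_state(pipeline_name: str = "vcd-demo-pipeline") -> None:
    """Print the latest execution status of every stage in the pipeline."""
    codepipeline = boto3.client("codepipeline")
    state = codepipeline.get_pipeline_state(name=pipeline_name)
    for stage in state.get("stageStates", []):
        latest = stage.get("latestExecution", {})
        print(f"{stage['stageName']}: {latest.get('status', 'NOT_EXECUTED')}")

if __name__ == "__main__":
    report_pipeline_state()
```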
MODEL DETAILS
Code Pipeline - Step 1 - Git Repository
The source code was managed using a Git Repo.  The first step in the pipeline was set up to monitor the Repo and would execute when a commit was made.
Code Pipeline - Step 2 - Jenkins Build
The pipeline automatically retrieved the source code package and sent it to the Jenkins Continuous Integration Server.  The Jenkins server was set up to build the deployment package and deliver it to the Pipeline.
Code Pipeline - Step 3 - Code Deploy
The Pipeline was set up to automatically deploy the Build to a cluster of three (3) test servers using AWS CodeDeploy.
Code Pipeline - Step 4 - Test (Release Validation)
The Pipeline was set up to create a Test (Validation) Session in xLM, and this session in turn was set up to launch the xLM Test Automation Model.
The Model would run all the required tests (IQ, OQ, UAT) and generate an Execution Report.
Code Pipeline - Step 5 - QA Approval to Release
The Pipeline was set up to stop execution until QA reviewed the test results and approved them.
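For illustration only, here is one way the QA decision could be recorded against a CodePipeline manual-approval action using boto3. The pipeline, stage and action names are hypothetical placeholders; in practice QA may simply approve in the console after reviewing the executed protocols in xLM.

```python
# Hedged sketch: locate the manual-approval action's token and post the
# QA decision back to the pipeline. All names are placeholders.
import boto3

def record_qa_decision(approved: bool, summary: str) -> None:
    codepipeline = boto3.client("codepipeline")
    state = codepipeline.get_pipeline_state(name="vcd-demo-pipeline")
    # Find the manual-approval action token in the QA review stage.
    stage = next(s for s in state["stageStates"] if s["stageName"] == "QA-Approval")
    action = next(a for a in stage["actionStates"] if a["actionName"] == "QAReview")
    token = action["latestExecution"]["token"]
    codepipeline.put_approval_result(
        pipelineName="vcd-demo-pipeline",
        stageName="QA-Approval",
        actionName="QAReview",
        result={"summary": summary, "status": "Approved" if approved else "Rejected"},
        token=token,
    )

# Example: record_qa_decision(True, "Executed protocols reviewed; results acceptable.")
```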
Code Pipeline - Step 6 - Automatic Summary Report Generation
The Pipeline was set up to create a Summary Report Session in xLM, and this session in turn was set up to launch the xLM Report Automation Model.
The Model would automatically capture all the evidence required to generate the Pipeline Validation Summary Report.
CONCLUSION
vCD (Validated Continuous Delivery) is a reality.  The DevOps model can be successfully adopted to continuously deliver GxP Apps.  The above Six Step Model provides the framework along with the toolset to implement "continuous validation" in the enterprise.  Such a model will not only compress deployment timelines but will also increase compliance through automation and significantly reduce cost.
The Fine Print - The above presentation of the use case is a simplified version for illustration purposes.  This use case does not include other steps like planning, requirements, traceability, etc.  xLM can set up an end-to-end DevOps Framework for your environments (Agile, Hybrid, Waterfall) that includes all the SDLC steps for GxP Apps, including IaaS / PaaS Qualifications.
xlmcv · 7 years ago
ServiceNow: Data Integrity Guide for GxP Compliance - Tables
This post provides a framework for configuring the ServiceNow Platform Tables to meet today's data integrity standards in order to comply with 21 CFR Part 11 and other predicate GxP regulations.  I will attempt to cover:
How to leverage in-built ServiceNow features to ensure Tables are configured for GxP Compliance?
How to design the configuration qualification (CQ) tests?
How to ensure "validated status" (VS) on an ongoing basis?
Step 1 - Table Configuration
Data Tables
You must identify all the Data Tables utilized by your apps that host GxP data (for example: incident table, change_request table).  For each table that hosts GxP data, turn on the audit feature.
You must also turn on the auditing for inserts (set the system property glide.sys.audit_inserts to TRUE).
You must ensure that no_audit_delete is NOT set to FALSE for any of the Data Tables (default value is TRUE).
You must also ensure that whitelisting is not enabled for the table (all fields should be audited) and none of the fields are blacklisted for auditing (no field should be excluded from auditing).
System Tables
Follow the above listed steps under Data Tables for the following System Tables:
sys_schema_change
sys_properties
sys_user
sys_user_group
sys_user_role
sys_user_has_role
sys_user_grmember
sys_group_has_role
sys_security_acl
sys_security_acl_role
Update the glide.ui.audit_deleted_tables system property to include the above system tables (this enables delete tracking for the included system tables).
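As a minimal sketch of how these settings could be checked automatically, the following uses the ServiceNow REST Table API via the Python requests library. The instance URL and credentials are placeholders; the property names are the ones discussed above.

```python
# Hedged sketch: verify the audit-related system properties from Step 1
# through the ServiceNow Table API. Instance and account are placeholders.
import requests

INSTANCE = "https://example.service-now.com"   # placeholder instance
AUTH = ("cq.reader", "********")               # placeholder read-only account

def get_property(name: str) -> str:
    resp = requests.get(
        f"{INSTANCE}/api/now/table/sys_properties",
        params={"sysparm_query": f"name={name}", "sysparm_fields": "name,value"},
        auth=AUTH,
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    result = resp.json()["result"]
    return result[0]["value"] if result else ""

def check_audit_properties(system_tables: list[str]) -> list[str]:
    """Return a list of findings; an empty list means the checks passed."""
    findings = []
    if get_property("glide.sys.audit_inserts").lower() != "true":
        findings.append("glide.sys.audit_inserts is not TRUE")
    deleted_tables = get_property("glide.ui.audit_deleted_tables")
    for table in system_tables:
        if table not in deleted_tables:
            findings.append(f"{table} missing from glide.ui.audit_deleted_tables")
    return findings
```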
Step 2 - Configuration Qualification
Design automated tests to:
Qualify the configuration settings in Step 1 above.
Perform smoke tests to ensure Table Audit feature (insert, update and delete) is functioning as expected.
Step 3 - Continuous Validation
Design a continuous validation (CV) framework to ensure compliance on an ongoing basis.  This CV framework should ensure the following:
Check the configuration settings match with compliance best practices (Step 1 above).
Perform smoke tests to ensure Table Audit feature (insert, update and delete) is functioning as expected.
Data and Configuration Compliance Checks
Query the sys_audit table to ensure data compliance is met.  For example, a flag has to be raised if records were deleted from Tables where deletion is prohibited.
Query the sys_schema_change table to ensure configuration compliance is met.  For example, a flag has to be raised if configuration changes associated with the locked tables are found.
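A hedged sketch of these compliance queries, reusing the INSTANCE/AUTH placeholders from the previous snippet. Depending on how your instance is configured, deletions may be recorded in sys_audit_delete rather than sys_audit, so the audit table name is left as a parameter; the field names used here are indicative and should be confirmed against your instance's dictionary.

```python
# Hedged sketch of the Step 3 data and configuration compliance checks.
import requests

def query_table(table: str, query: str, fields: str) -> list[dict]:
    resp = requests.get(
        f"{INSTANCE}/api/now/table/{table}",
        params={"sysparm_query": query, "sysparm_fields": fields},
        auth=AUTH,
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]

def flag_prohibited_deletions(protected_tables: list[str],
                              audit_table: str = "sys_audit_delete") -> list[dict]:
    """Return audit entries showing deletions from tables where deletion is prohibited."""
    query = "tablenameIN" + ",".join(protected_tables)
    return query_table(audit_table, query, "tablename,documentkey,sys_created_by,sys_created_on")

def flag_schema_changes(locked_tables: list[str]) -> list[dict]:
    """Return schema changes recorded against locked tables."""
    # Field names are indicative; confirm against your instance's schema.
    query = "nameIN" + ",".join(locked_tables)
    return query_table("sys_schema_change", query, "name,sys_created_by,sys_created_on")
```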
By following the above three steps, a robust Table Level Data Integrity Framework can be deployed leveraging the built-in ServiceNow Platform features.  Turning on the audit trail is the easy part; enforcing compliance based on the data in the audit trail is the hard part.  By implementing a continuous validation framework, you not only ensure that the best-practice settings are always turned on, but also that audit trail data is mined to ensure compliance.
Further Reading
Continuous Validation for ServiceNow - How Is It Done?
How are you fulfilling the FDA's Audit Trail expectations for Data Integrity?
xlmcv · 7 years ago
Continuous Validation for ServiceNow - HOW IS IT DONE?
What are the key steps involved in continuously validating a ServiceNow App?
The key elements of a Continuous Validation Program for ServiceNow Apps are described below (Note: ServiceNow IaaS/PaaS Qualification is not included here; the focus of this article is on App Validation).
One has to bear in mind that the underlying IaaS and PaaS infrastructure is constantly changing. In fact, the Cloud App itself is continuously changing. In the new cloud world, there is little value in pinning to an "ancient" version.  Thus this Continuous Validation Framework is designed to mitigate these risks and ensure that your ServiceNow app is maintained in a validated state.
"Continuous validation is providing documented evidence to certify that a cloud app not only met the pre-established acceptance criteria, but “continuous” to meet thus mitigating the risk of unknown changes."
Requirements Definition
This step provides the foundation for the continuous validation framework. We clearly specify Functional, Non-functional, Regulatory, Performance, Security, Logging, Disaster Recovery, Interface requirements, etc. working with the Customer/End User.
Risk Assessment
In our experience, a logical, useful risk assessment is rarely performed, let alone applied to testing strategies. The output of our risk assessment is applied to our testing strategies to determine: What features to test? What should be the extent of negative testing? What types of testing strategies to utilize (e.g., datasets to use, N-pair testing)?
We establish a risk-based approach to qualifying each requirement.   Our risk-based approach is achieved by assigning a Risk Priority to each requirement.  A typical Risk Assessment may be as follows:
High – A risk priority of High shall be assigned to a critical requirement which meets the following criteria:
- is not “out of the box” (OOTB) functionality; AND
- is a legal/regulatory requirement.
All High priority requirements will be tested (both positive and negative testing).
Moderate – A risk priority of Moderate shall be assigned to an important requirement which meets the following criteria:
- is achieved with “out of the box” features; AND
- is a legal/regulatory requirement.
All Moderate Priority requirements will be tested (positive testing) or verified (configuration verification).
Minimum – A risk priority of Minimum shall be assigned to a “nice to have” requirement which meets the following criteria:
- is achieved with “out of the box” software features.
Minimum Priority requirements will not be tested.
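Purely as an illustration (not part of the methodology itself), the scheme above can be expressed as a small helper that assigns a priority from two attributes of a requirement:

```python
# Illustrative sketch of the risk-priority scheme described above.
from dataclasses import dataclass

@dataclass
class Requirement:
    identifier: str
    is_ootb: bool          # achieved with "out of the box" functionality?
    is_regulatory: bool    # legal/regulatory requirement?

def risk_priority(req: Requirement) -> str:
    """Map a requirement to High / Moderate / Minimum per the scheme above."""
    if req.is_regulatory and not req.is_ootb:
        return "High"      # positive and negative testing
    if req.is_regulatory and req.is_ootb:
        return "Moderate"  # positive testing or configuration verification
    return "Minimum"       # not tested

# Example: risk_priority(Requirement("REQ-001", is_ootb=False, is_regulatory=True)) -> "High"
```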
Specification Definition
We define Configuration, Workflow, Interface, Security, Log Management, and related specifications to meet the requirements defined earlier.
Traceability Matrix
We establish traceability between requirements and testing to ensure coverage.
Test Automation Scripts
xLM leverages a Model Based Test Automation framework for developing various models to validate your ServiceNow apps. We use a data designer to generate test data including randomized data.  We also use combinatorial testing strategies to reduce the number of iterations while increasing the test effectiveness.
Test Model Validation
The test automation model is validated to ensure that it meets the specified objectives. This validation effort is kept small by a model design that generates good execution reports. All our execution reports provide enough evidence that they are, to a certain extent, self-validating.
Test Execution
The model based test automation approach provides us with the flexibility to re-purpose the same model to conduct various types of tests (smoke, regression, greedy path, optimal path, load, performance, etc.). Also, such a framework is more conducive to updates (remember, your cloud app is constantly changing...so will your test automation framework!). Our framework is well suited for continuously running validation test scripts (say, on a daily basis) - you can even randomize your smoke and regression tests continuously!
Validation Reporting
A robust ALM (Application Lifecycle Management) tool forms the heart of our Continuous Validation Platform. Our goal is to remove paper and manual generation of reports (in short: Microsoft Word will not play any role here!). Our ALM tool provides real-time dashboards, KPIs, summary reports, test deviation reports and more. Your real-time Validation Health Dashboard becomes a reality.
Conclusion
The xLM platform is designed to accommodate a high velocity of changes with built-in support for robust testing strategies.  It incorporates modern technology and frameworks to ensure data integrity requirements are met as expected by the Regulatory Agencies.  xLM is the right track for ServiceNow validation!
Why xLM for ServiceNow?
xlmcv · 7 years ago
A Step-by-Step Guide to "continuously" Qualify Microsoft Azure PaaS and Serverless Architectures
I am frequently asked whether Azure Platform-as-a-Service (PaaS, which includes Serverless Architectures) can be qualified as well.  Surprisingly, this question is asked even by Azure users who agree that Azure IaaS can be qualified.  In this blog post I will provide a prescriptive guide to intelligently qualify Azure PaaS and maintain it in a qualified state (QS).
Before reading ahead, one must understand that a Software App that resides on IaaS/PaaS services is “validated” while IaaS/PaaS services are “qualified”.  A SaaS application can be truly “validated” only if the underlying IaaS/PaaS services are fully qualified.
Step-by-Step Guide
Step 1:  Establish the “formal” user requirements for your Azure PaaS service.  The user requirements are the “intended use” of the PaaS service across your organization.  Once this Azure PaaS service is qualified for its intended use, any development team within your company can use the “qualified” PaaS service in their projects.  
When developing the intended use, I recommend that you categorize your requirements into:
Functional
Security & Compliance
Availability
Performance
Step 2:  Establish a risk-based approach to qualifying each requirement.   Your risk-based approach can be achieved by assigning a Risk Priority to each requirement as follows:
High – A risk priority of High shall be assigned to a critical requirement which meets the following criteria:
- is not “out of the box” (OOTB) functionality; AND
- is a legal/regulatory requirement.
All High priority requirements will be tested (both positive and negative testing). Note: the High risk priority may not be applicable at all in this context.
Moderate – A risk priority of Moderate shall be assigned to an important requirement which meets the following criteria:
- is achieved with “out of the box” PaaS features; AND
- is a legal/regulatory requirement.
All Moderate Priority requirements will be tested (positive testing) or verified (configuration verification).
Minimum – A risk priority of Minimum shall be assigned to a “nice to have” requirement which meets the following criteria:
- is achieved with “out of the box” software features.
Minimum Priority requirements will not be tested.
Step 3: Build a "continuous" qualification framework to qualify your Azure PaaS service. This framework can automatically perform various tests to ensure all the applicable High/Moderate Priority requirements are met. The test execution reports with all evidence are automatically generated.   IT Quality reviews the results and then certifies the Azure PaaS service for its intended use.
Step 4: Make your qualified Azure PaaS Service available in your global catalog.  Your global teams can deploy this PaaS service any number of times without worrying about qualification (Qualify it once and use it many times!).
Note:  When such a qualified service is provisioned, the end-user needs to review your SLAs (in this case the intended use requirements that the PaaS service is continuously qualified for).  If they have additional requirements, then such requirements need to be managed under change control and the continuous qualification framework updated accordingly.
Step 5: Establish a qualification schedule wherein you constantly (for example: daily) run your qualification tests and automatically provide evidence that the “qualified state (QS)” of the Azure PaaS service has not drifted (See FAQs below for more details).  
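As a tool-agnostic sketch of such a schedule, the following assumes a pytest-based qualification suite and a simple evidence folder layout; neither is prescribed by this post, and any test runner that produces reviewable evidence would serve.

```python
# Hedged sketch: run the qualification suite on a schedule (e.g. daily) and
# keep a timestamped evidence artifact per run. Paths and tooling are assumptions.
import subprocess
from datetime import datetime, timezone
from pathlib import Path

def run_daily_qualification(suite_dir: str = "paas_qualification_tests") -> bool:
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    evidence_dir = Path("evidence") / stamp
    evidence_dir.mkdir(parents=True, exist_ok=True)
    report = evidence_dir / "qualification-report.xml"
    # Execute the automated qualification tests and capture a JUnit-style report
    # as objective evidence that the qualified state has not drifted.
    result = subprocess.run(
        ["pytest", suite_dir, f"--junitxml={report}"],
        capture_output=True,
        text=True,
    )
    (evidence_dir / "console.log").write_text(result.stdout + result.stderr)
    return result.returncode == 0  # False should trigger a QS-drift investigation
```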
Conclusion
By following the above five (5) steps, you can build and maintain a qualified PaaS service that is GxP compliant and always "audit ready".   The above framework uses the mantra "qualify it once and use it many times".
Frequently Asked Questions (FAQs)
Microsoft Azure releases changes constantly. How will I maintain change control?
A true "Cloud" is designed to constantly release changes so that the end customer can leverage these innovations and thus increase productivity. Azure PaaS is no exception here. However, this presents a dilemma for the traditional validation folks who are used to reviewing each change and then addressing it one way or the other.  
If you want to embrace the Cloud, the compliance perspective must change from examining every change by the Cloud Provider (which is practically impossible considering the velocity of changes) to ensuring your requirements are met constantly. This can be achieved with the "continuous" validation framework whereby you are constantly (for example: daily) testing to ensure your requirements are met in spite of the changes.   
How does "continuous" qualification really work?
Continuous Qualification is GxP compliant and based on a sophisticated Model Based Testing Framework.  Once the Continuous Qualification model is built, it can be used to perform initial qualification of a PaaS service. It can be run at regular intervals to "continuously qualify" with no human intervention ("lights out mode").  This approach enables cost effective testing on a continuous basis thus enabling deployment of GxP workloads in the public cloud.  
Why should monitoring requirements be included in Step 1 above?  Isn’t addressing only Functional & Regulatory Compliance requirements sufficient?
Once the PaaS service is built and deployed, you need to monitor its health and also ensure that the Qualified State (QS) drift has not occurred.  QS drift can occur when changes to the deployed service are made (either intentional or unintentional) after it is qualified by bypassing the change control process.  Also, Microsoft Azure provides various services that will help you provide further assurance that the “qualified” service is functioning as expected.
Automation & Control enables continuous services and compliance with automation and configuration management. You can apply and monitor configurations using a highly available pull service, and fix configuration drift without manual intervention. You can combine change tracking with configuration management to identify and apply configurations and enable compliance. This service will enable compliance with the Qualified State (QS). It will bring to your notice if a QS Drift has occurred.  
Log Analytics enables you to quickly connect and collect log data. You can correlate and analyze using powerful machine learning constructs. You can transform your Azure activity data into actionable insights. You can automate and trigger remediation with Azure Automation, Logic Apps and Functions.  
Azure Monitor will help you get detailed, up-to-date performance and utilization data, access to the activity log that tracks every API call, and diagnostic logs that help you debug issues.  Azure Monitor gives you the basic tools you need to analyze and diagnose any operational issue, so you can resolve it efficiently.  
Azure Advisor helps you optimize across four different areas – high availability, performance, security, and cost – with all recommendations accessible in one place on the Azure portal. You can follow recommendations based on category and business impact.  
Security & Compliance helps you analyze events across multiple data sources and identify security risks. You can understand the scope and impact of threats and attacks to mitigate the damage of a security breach. You can understand the security posture of your entire environment regardless of the platform. You can capture all the log and event data required for security or compliance audits.  
Azure Policy helps you turn on built-in policies or build your own custom policies to enable security and management for Azure PaaS resources. You can choose to either enforce policies, or audit policy compliance against best practices.
Why are performance requirements included in Step 1 above?  Isn’t addressing only Functional & Regulatory Compliance requirements sufficient?
Azure provides various tools to monitor the performance of your PaaS services.  By continuously monitoring the performance, you are also ensuring compliance.  For example, if a particular performance parameter falls outside the pre-set acceptable range limit, as compared to its historical data, an investigation may be warranted.   This may be a symptom resulting from a change or a patch that has been applied. 
Azure SQL Database Intelligent Insights lets you know what is happening with your database performance.   Intelligent Insights uses built-in intelligence to continuously monitor database usage through artificial intelligence and detect disruptive events that cause poor performance. Once detected, a detailed analysis is performed that generates a diagnostics log with an intelligent assessment of the issue. This assessment consists of a root cause analysis of the database performance issue and, where possible, recommendations for performance improvements.
Further Reading...
Microsoft Trust Center:  Food and Drug Administration CFR Title 21 Part 11
Microsoft Trust Center: Compliance Offerings
Microsoft Azure GxP Guidelines
Continuous Qualification for Microsoft Azure:  How is it done?
xlmcv · 7 years ago
A Step-by-Step Guide to "continuously" Qualify Azure
"Azure makes "infrastructure-as-code" a reality.  My blog post will make "continuous-qualification" an absolute reality. "
The million dollar question is "Can Microsoft Azure be qualified?".  In this blog post I will provide a prescriptive guide on not just "how to qualify", but also how to maintain the qualified state intelligently.   
As we all know, initial qualification of IT Infrastructure is hard but pales in comparison to the effort involved in maintaining the qualified state (QS).   The model presented here ensures QS Integrity so that requalification can be avoided at all costs.   
Step-by-Step Guide
Azure Resource Manager (ARM) Template Qualification
Step 1:  Transform your infrastructure requirements into code.  This can be done by using ARM Templates.  Using ARM you create a template for your infrastructure.  Then, test your template and hand it over to IT Quality.  
Step 2: Build a "continuous" qualification framework to qualify your template.  This framework can automatically deploy the template in a TEST environment and perform various tests to ensure all the requirements are met.  Also, the test execution reports with all evidence are automatically generated.   IT Quality reviews the results and then certifies the ARM template.  
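As a hedged sketch of the "automatically deploy the template in a TEST environment" piece, assuming the azure-identity and azure-mgmt-resource Python packages; the subscription ID, resource group and deployment name are placeholders:

```python
# Hedged sketch: deploy an ARM template into a TEST resource group so the
# qualification framework can exercise it. Names and IDs are placeholders.
import json
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

def deploy_template_to_test(template_path: str, parameters: dict) -> str:
    client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
    with open(template_path) as fh:
        template = json.load(fh)
    poller = client.deployments.begin_create_or_update(
        "rg-qualification-test",          # TEST resource group (placeholder)
        "arm-template-qualification",     # deployment name (placeholder)
        {
            "properties": {
                "template": template,
                "parameters": {k: {"value": v} for k, v in parameters.items()},
                "mode": "Incremental",
            }
        },
    )
    return poller.result().properties.provisioning_state  # e.g. "Succeeded"
```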
Step 3: Publish the qualified ARM template to the Service Catalog.   Your global teams can deploy this template any number of times without worrying about qualification (Qualify it once and use it many times!).
Azure Monitoring Toolset Qualification
In order to maintain your Azure Infrastructure in a Qualified State, you need to create a "qualified" catalog of Azure Monitoring Services.   Such services include Automation, Azure Monitor,  Automation & Control,  Application Insights, Azure Advisor, Azure Policy, Log Analytics, Security & Compliance, Azure Security Center, etc..
Step 4: Build a "continuous" qualification framework to qualify each monitoring service.  This framework can automatically qualify each service in a TEST environment and perform various tests to ensure all the requirements are met.  IT Quality reviews the results and then releases the service for Production Use.  
Step 5: Now each of the qualified monitoring services can be used globally without worrying about qualification.
Build Qualified Azure Infrastructure and Go Live
Step 6: Now your IT teams can build "qualified" infrastructure using ARM templates from the Service Catalog.  Also, they can monitor them using "qualified" services in order to maintain the "qualified state" of the infrastructure.
Conclusion
By following the above six (6) steps, you can build and maintain a qualified infrastructure that is GxP compliant and always "audit ready".    The above framework uses the mantra "qualify it once and use it many times", thus not only making it cost efficient but also baking in the best practices.  
Frequently Asked Questions (FAQ)
Microsoft Azure releases changes constantly.  How will I maintain change control?
A true "cloud" is designed to constantly release changes so that the end customer can leverage these innovations and thus increase productivity.  Azure is no exception here.  However, this presents a dilemma for the traditional validation folks who are used to reviewing each change and then addressing it one way or the other.   If you want to embrace the Cloud, the compliance perspective must change from examining every change by the Cloud Provider (which is practically impossible considering the velocity of changes) to ensuring your requirements are met constantly.  This can be achieved with the "continuous" validation framework whereby you are constantly (for example: daily) testing to ensure your requirements are met in spite of the changes.
How does "continuous" qualification really work? Continuous Qualification is GxP compliant and based on a sophisticated Model Based Testing Framework.   Once the Continuous Qualification model is built, it can be used to perform initial qualification of an infrastructure template or a monitoring service, for example.  It can be run at regular intervals to "continuously qualify" with no human intervention ("lights out mode").   This approach enables cost effective testing on a continuous basis thus enabling deployment of GxP workloads in the public cloud.  
What are the real advantages of qualifying an Infrastructure Template?
Before the advent of the "software defined" data center, the only way to qualify was to build first and then perform the qualification.  Now with Azure, building infrastructure is akin to writing code.  In other words, you script the infrastructure you want to build and then run it as many times as you want to consistently, and within minutes, stand up your virtual data center.   This makes qualification cost effective and implementing IT best practices that much easier.  Azure does provide downloadable templates that you can start with and then modify to meet your requirements.  For example, you can stand up a complex environment for an SAP implementation within minutes using ARM templates.    You build the template using best practices (for regulatory, security compliance, etc.) that meet your specific requirements.  You then qualify the template and make it available in the Service Catalog for consumption throughout your organization.
Why do we need the monitoring tools after the template is qualified?
Once the infrastructure is built and deployed, you need to monitor its health and also ensure that the Qualified State (QS) drift has not occurred.   QS drift can occur when changes to the deployed infrastructure are made (either intentional or unintentional) after it is qualified by bypassing the change control process.

Automation & Control enables continuous services and compliance with automation and configuration management.  You can apply and monitor configurations using a highly-available pull service, and fix configuration drift without manual intervention.  You can combine change tracking with configuration management to identify and apply configurations and enable compliance. You can deliver orchestrated update management for servers.  This service will enable compliance with the Qualified State (QS).  It will bring to your notice if a QS Drift has occurred.

Log Analytics enables you to quickly connect and collect log data from multiple sources.  You can correlate and analyze using powerful machine learning constructs.  You can transform your Azure activity data into actionable insights.  You can automate and trigger remediation with Azure Automation, Logic Apps and Functions.

Azure Monitor will help you get detailed, up-to-date performance and utilization data, access to the activity log that tracks every API call, and diagnostic logs that help you debug issues.  Azure Monitor gives you the basic tools you need to analyze and diagnose any operational issue, so you can resolve it efficiently.

Azure Advisor helps you optimize across four different areas – high availability, performance, security, and cost – with all recommendations accessible in one place on the Azure portal. You can follow recommendations based on category and business impact.

Security & Compliance helps you analyze events across multiple data sources and identify security risks. You can understand the scope and impact of threats and attacks to mitigate the damage of a security breach.  You can understand the security posture of your entire environment regardless of the platform.  You can capture all of the log and event data required for security or compliance audits.

Azure Policy helps you turn on built-in policies, or build your own custom policies, to enable security and management for Azure resources.  You can choose to either enforce policies, or audit policy compliance against best practices.
What other measures should I consider to ensure Qualified State Integrity?
You need to design your infrastructure based on IT best practices that meet your business, security and compliance requirements.  For example, you can define encryption requirements, permissions to resources (which roles apply to certain environments), which compute images are authorized (based on hardened images of servers you have authorized), and what kind of logging needs to be enabled.  Such security best practices can be enforced by using ARM templates. Our goal is to create a GxP audit-ready environment.  For example, Automation & Control allows you to capture the current state of any environment, which can then be compared with your “secure environment” rules. You can ensure that the controls are operating 100 percent at any point in time, versus traditional audit sampling methods or point-in-time reviews.
Further Reading...
Service Based Accretive GxP Qualification Model For Cloud Infrastructure
Continuous Qualification for AWS or Microsoft Azure - How is it done?
Continuous Validation - How Is It Done? 
xlmcv · 7 years ago
A Step-by-Step Guide to "continuously" Qualify AWS
"AWS makes "infrastructure-as-code" a reality.  My blog post will make "continuous-qualification" an absolute reality. "
The million dollar question is "Can AWS be qualified?".  In this blog post I will provide a prescriptive guide on not just "how to qualify", but also how to maintain the qualified state intelligently.   
As we all know, initial qualification of IT Infrastructure is hard but pales in comparison to the effort involved in maintaining the qualified state (QS).   The model presented here ensures QS Integrity so that requalification can be avoided at all costs.   
Step-by-Step Guide
AWS CloudFormation Template Qualification
Step 1:  Transform your infrastructure requirements into code.  This can be done by using AWS CloudFormation.  Using CloudFormation you create a template for your infrastructure.  Then, test your template and hand it over to IT Quality.  
Step 2: Build a "continuous" qualification framework to qualify your template.  This framework can automatically deploy the template in a TEST environment and perform various tests to ensure all the requirements are met.  Also, the test execution reports with all evidence are automatically generated.   IT Quality reviews the results and then certifies the CloudFormation template.  
Step 3: Publish the qualified CloudFormation template to the Service Catalog.   Your global teams can deploy this template any number of times without worrying about qualification (Qualify it once and use it many times!).
AWS Monitoring Toolset Qualification
In order to maintain your AWS Infrastructure in a Qualified State, you need to create a "qualified" catalog of AWS Monitoring Services.   Such services include AWS Config, AWS CloudTrail, AWS Systems Manager, Amazon GuardDuty, Amazon Inspector, etc.
Step 4: Build a "continuous" qualification framework to qualify each monitoring service.  This framework can automatically qualify each service in a TEST environment and perform various tests to ensure all the requirements are met.  IT Quality reviews the results and then releases the service for Production Use.  
Step 5: Now each of the qualified monitoring services can be used globally without worrying about qualification.
Build Qualified AWS Infrastructure and Go Live
Step 6: Now your IT teams can build "qualified" infrastructure using CloudFormation templates from the Service Catalog.  Also, they can monitor them using "qualified" services in order to maintain the "qualified state" of the infrastructure.
Conclusion
By following the above six (6) steps, you can build and maintain a qualified infrastructure that is GxP compliant and always "audit ready".    The above framework uses the mantra "qualify it once and use it many times", thus not only making it cost efficient but also baking in the best practices.  
Frequently Asked Questions (FAQ)
AWS releases changes constantly.  How will I maintain change control?
A true "cloud" is designed to constantly release changes so that the end customer can leverage these innovations and thus increase productivity.  AWS is no exception here.  However, this presents a dilemma for the traditional validation folks who are used to reviewing each change and then addressing it one way or the other.   If you want to embrace the Cloud, the compliance perspective must change from examining every change by the Cloud Provider (which is practically impossible considering the velocity of changes) to ensuring your requirements are met constantly.  This can be achieved with the "continuous" validation framework whereby you are constantly (for example: daily) testing to ensure your requirements are met in spite of the changes.
How does "continuous" qualification really work? Continuous Qualification is GxP compliant and based on a sophisticated Model Based Testing Framework.   Once the Continuous Qualification model is built, it can be used to perform initial qualification of an infrastructure template or a monitoring service, for example.  It can be run at regular intervals to "continuously qualify" with no human intervention ("lights out mode").   This approach enables cost effective testing on a continuous basis thus enabling deployment of GxP workloads in the public cloud.  
What are the real advantages of qualifying an Infrastructure Template?
Before the advent of the "software defined" data center, the only way to qualify was to build first and then perform the qualification.  Now with AWS Cloud, building infrastructure is akin to writing code.  In other words, you script the infrastructure you want to build and then run it as many times as you want to consistently, and within minutes, stand up your virtual data center.   This makes qualification cost effective and implementing IT best practices that much easier.  Amazon does provide downloadable templates that you can start with and then modify to meet your requirements.  For example, you can stand up a complex environment for an SAP implementation within minutes using CloudFormation templates.    You build the template using best practices (for regulatory, security compliance, etc.) that meet your specific requirements.  You then qualify the template and make it available in the Service Catalog for consumption throughout your organization.
Why do we need the monitoring tools after the template is qualified?
Once the infrastructure is built and deployed, you need to monitor its health and also ensure that the Qualified State (QS) drift has not occurred.   QS drift can occur when changes to the deployed infrastructure are made (either intentional or unintentional) after it is qualified by bypassing the change control process.

AWS Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations.  This service will enable compliance with the Qualified State (QS).  It will bring to your notice if a QS Drift has occurred.

AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure.  This service provides the audit trail transactions that can be used by other AWS Services (e.g. AWS Config, AWS SNS).

AWS Systems Manager gives you visibility and control of your infrastructure on AWS. With Systems Manager you can view operational data from multiple AWS services and automate operational tasks across your AWS resources.  It helps you easily understand and control the current state of your resource groups and view detailed system configurations, operating system patch levels, software installations, application configurations, and other details about your environment.

Amazon GuardDuty is a managed threat detection service that continuously monitors for malicious or unauthorized behavior to help you protect your AWS accounts and GxP workloads.  It monitors for activity such as unusual API calls or potentially unauthorized deployments that indicate a possible account compromise, thus violating your Qualified State.

Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications. Amazon Inspector automatically assesses applications for vulnerabilities or deviations from best practices.
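As a small illustration of how QS drift detection with AWS Config might be automated, the following boto3 sketch polls the compliance status of a set of Config rules; the rule names in the example are hypothetical.

```python
# Hedged sketch: poll AWS Config rule compliance to detect Qualified State drift.
import boto3

def detect_qs_drift(rule_names: list[str]) -> list[str]:
    """Return the names of Config rules that are not COMPLIANT."""
    config = boto3.client("config")
    response = config.describe_compliance_by_config_rule(ConfigRuleNames=rule_names)
    drifted = []
    for item in response.get("ComplianceByConfigRules", []):
        compliance = item.get("Compliance", {}).get("ComplianceType")
        if compliance != "COMPLIANT":
            drifted.append(item["ConfigRuleName"])
    return drifted

# Example (hypothetical rule names):
# detect_qs_drift(["s3-bucket-server-side-encryption-enabled", "cloudtrail-enabled"])
```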
What other measures should I consider to ensure Qualified State Integrity?
You need to design your infrastructure based on IT best practices that meet your business, security and compliance requirements.  For example, you can define encryption requirements (forcing server side encryption for S3 objects), permissions to resources (which roles apply to certain environments), which compute images are authorized (based on hardened images of servers you have authorized), and what kind of logging needs to be enabled (such as enforcing the use of CloudTrail on applicable resources).  Such security best practices can be enforced by using CloudFormation templates. Our goal is to create a GxP audit-ready environment.  For example, AWS Config allows you to capture the current state of any environment, which can then be compared with your “secure environment” rules. You can ensure that the controls are operating 100 percent at any point in time, versus traditional audit sampling methods or point-in-time reviews.
Further Reading...
GxP Cloud on AWS
Automating Security, Compliance, and Governance in AWS
Service Based Accretive GxP Qualification Model For Cloud Infrastructure
Continuous Qualification for AWS or Microsoft Azure - How is it done?
Continuous Validation - How Is It Done? 
xlmcv · 8 years ago
How are you fulfilling the FDA's Audit Trail expectations for Data Integrity?
FDA put out its DRAFT guidance on Data Integrity and Compliance with cGMP in April 2016. This guidance is important because it helps us understand FDA's thinking on Data Integrity.
"For the purposes of this guidance, data integrity refers to the completeness, consistency, and accuracy of data. Complete, consistent, and accurate data should be Attributable, Legible, Contemporaneously recorded, Original or a true copy, and Accurate (ALCOA)."  -- FDA Guidance on Data Integrity and Compliance With CGMP, April 2016
My focus in this post will be limited to IT applications. Needless to say the guidance has big repercussions for anyone in areas of IT Quality and Computer Systems Validation. Many predicate rules that were written for the paper world still hold good and can be applied to e-records as well. FDA does cite quite a few of them in the guidance.
The two words "Data Integrity" have far-reaching implications for computer systems. Any FDA regulated company needs to carefully look at its CSV program and revisit, among others, the following areas:
System Design & Implementation
Validation
User Access Controls
Segregation of Duties
Audit Trail Design and Capture
Review of Audit Trails
Data Backup and Archiving
Data Retention
Disaster Recovery & Business Continuity (BCP)
Electronic Signature Design & Implementation
Training
I would like to focus on only one topic in this post: Audit Trail reviews. There can be two (2) types of reviews:
Regular Review: Audit Trails that need to be reviewed with the "parent GxP record".
"FDA recommends that audit trails that capture changes to critical data be reviewed with each record and before final approval of the record. Audit trails subject to regular review should include, but are not limited to, the following: the change history of finished product test results, changes to sample run sequences, changes to sample identification, and changes to critical process parameters."  -- FDA Guidance on Data Integrity and Compliance With CGMP, April 2016
Scheduled Review: Audit Trails that need to be reviewed at regular intervals.
"FDA recommends routine scheduled audit trail review based on the complexity of the system and its intended use."  -- FDA Guidance on Data Integrity and Compliance With CGMP, April 2016
Let us take regular review of audit trails. The expectation here is that the audit trail associated with a "critical GxP record" must be reviewed along with the record itself (e.g.: a batch record). A "data integrity friendly" application must present the audit trail data to the reviewer in a way that is easily understandable and user friendly for this criterion to be met. Otherwise the reviewer will be spending 10x the time reviewing the audit trail rather than the parent record. Most modern and well designed IT apps can accomplish this out of the box (e.g.: Atlassian Jira).
Now let us move on to the trickier "scheduled review". Anyone who is familiar with an Enterprise application realizes that this can be a daunting task. Even if you decide to perform such reviews, you need to ask: How often? What to Review? How to make this a value added exercise? What is the cost of doing such reviews? How many resources do I need?
Some of the larger pharma companies are mandating that SOPs must be developed and audit trail reviews must be conducted on a periodic basis. System owners and administrators are struggling with this mandate. Most of the legacy applications generate logs and audit trails that can be read only by software programmers rather than end users. Also, the sheer volume of audit trail records makes this an insurmountable task (for example: a cloud enterprise application can generate hundreds of audit trail records for a single workflow execution). Imagine a situation where one has to review a million records generated in one quarter!
Big Data Analytics and Machine Learning can be your savior. A Big Data system can be setup to consume audit trail logs and records and display meaningful information in a dashboard format. Such a system can also send notifications of any unusual activity. Also, such a system can categorize transactions and provide statistical information. Such an Enterprise system can consume logs and records from various apps and perform bulk of the scheduled reviews.
Such a system can:
Look for unusual login activity
Monitor record deletion (if such an activity is not permissible)
Monitor changes to critical system configuration records
Monitor user role changes
Monitor abnormal, disallowed or unusual record state changes
Monitor system logs for critical application errors and correlate them with user activity
And much more...
There are many advantages in designing such a Big Data system. Once a good baseline is established by providing it with historical data, it can learn from it and flag unusual activity in near real-time. For example: such a system can flag a particular sequence of record status change as "unexpected" based on the historical data.
An intelligent Big Data system can be your big friend to handle audit trail reviews. Such a system should be integral part of your data integrity and cyber-compliance toolset.
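As an illustrative sketch only, two of the checks listed above can be expressed over an exported audit-trail log with pandas; the column names are assumptions about the export format, not any specific product's schema.

```python
# Illustrative sketch of two scheduled-review checks on an exported audit log.
import pandas as pd

def review_audit_export(csv_path: str, protected_tables: set[str]) -> dict:
    log = pd.read_csv(csv_path, parse_dates=["timestamp"])
    findings = {}

    # Flag record deletions on tables where deletion is not permissible.
    deletions = log[(log["action"] == "delete") & (log["table"].isin(protected_tables))]
    findings["prohibited_deletions"] = deletions

    # Flag unusual login activity: logins outside 06:00-22:00 local time.
    logins = log[log["action"] == "login"]
    findings["off_hours_logins"] = logins[~logins["timestamp"].dt.hour.between(6, 21)]

    return findings
```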
"xLM provides Big Data Analytics for Audit Trail reviews as part of its Continuous Validation managed service."
xlmcv · 8 years ago
Continuous Validation for Track & Trace Enterprise Application
Scenario: 
Any Track and Trace Enterprise application is a complex implementation which has many interfaces.  What are the requirements of a validation toolset for such an application?  Let us take a look.
In a typical architecture, the Pharmaceutical Company manages the Master Data, Serial Number Management, Partner Management, Country Specific Compliance, etc.  This company also has to manage interfaces to its Contract Manufacturers & Packagers, 3PLs and Customers.
The Track and Trace Software App is constantly changing:
Regulatory Changes
Data Schema Changes
Evolving Industry Standards
Country Specific Deadlines
Add/Remove CMOs, 3PLs, CPOs
Software Patches
New Releases
The requirements for the validation toolset are:
Continuous Validation Platform:
The validation platform required to meet the qualification challenges of a modern, complex Track and Trace enterprise app must itself be sophisticated.  xLM has developed a platform and a managed service that stacks up to this challenge.  Click here for xLM Platform details, click here to see how xLM Continuous Validation Service works and click here for xLM's process flow.
What is the Bottomline?
Conclusion:
xLM is well positioned to provide a cost effective and efficient managed service that fulfills the requirements of the Track and Trace scenario.  Our platform is designed to accommodate high velocity of changes with built-in support for robust testing strategies.  It incorporates modern technology and frameworks to ensure data integrity requirements are met as expected by the Regulatory Agencies.   xLM is the right track to trace validation!
xlmcv · 8 years ago
Continuous Qualification for AWS or Microsoft Azure - How is it done?
The question is "Can Amazon AWS or Microsoft Azure be qualified so that Life Science companies can run validated apps on it?"
The Cloud releases updates and changes at such great velocity that it leaves validation specialists continuously wondering how to qualify it.  Traditional strategies, including GAMP 5, were never designed to address continuous changes.  These models were designed for a "waterfall" world and not an "agile" one.  It is not possible to qualify an infrastructure where changes are released without release notes with such an outdated mindset and toolset.
Does this mean that the Life Science world should stay in the dark ages with their paper based, wet signature toolset and a brick and mortar data center?  You need continuous Qualification for Continuous Compliance that works better than ever, just like the Cloud!
Here is a step by step guide to Continuous Qualification:
Step 1: Design your BASE Cloud Infrastructure (combination of IaaS / PaaS)
Step 2: Divide your BASE into logical Services
Core Services (e.g. Networking, Compute)
Management Services (e.g. Auditing, Monitoring)
Step 3: Build your DEV BASE IaaS/PaaS.  Finalize and test your configuration.
Step 4: Develop a Qualification Plan / Governance Framework
Step 5: Subscribe to our IaaS / PaaS Qualification Service.  We have built a sophisticated infrastructure to qualify Microsoft Azure or AWS Services.  Our Service Qualification coverage is unmatched.  We post the following on a "Weekly" basis on our customer portal:
Service Qualification Plan
Service Requirements Specification (with Comprehensive Coverage)
Service Risk Assessment
Service Qualification Protocols & Execution Reports
Service Qualification Summary Reports
Service Release Certificates
Step 6:  You don't have to Qualify any Service.  We will do that for you on a continuous basis.  Our "continuous" qualification platform for each Service will provide documented evidence that your Service continues to work as designed (irrespective of the many changes pushed by the Cloud Vendor).
Step 7:  Deploy PROD BASE IaaS/PaaS
Step 8: Add or Modify Services to build out your Cloud under Change Control
xlmcv · 8 years ago
Continuous Validation - How is it done? - Part 3: Process Flow
This blog post is Part 3 in this series. What are the key process steps involved in continuously validating a Cloud App? (click here for Part 1; click here for Part 2)
STEP 1
Customer will provide the User Requirements (URS) to xLM. 
STEP 2
xLM IT will create a new xLM instance on its qualified infrastructure.  This new instance is validated.
A Validation Plan and Risk Assessments are created.
A new SUT (System Under Test) is created in xLM.  The xLM team will:
input the requirements
perform risk assessment working with the customer
create specifications (design, configuration, etc..)
create test cases (automated, manual).  All test cases are pre-approved by the customer.
Bi-directional traceability (Requirements <> Specifications <> Test Cases) is automatically generated by the system.
Customer approves all validation deliverables.
STEP 3
Test Scripts (IQ, OQ and/or UAT) are executed in the Cloud App QA Environment.
Customer approves the Validation Summary Report (automatically generated by xLM)
STEP 4
A Release Certificate will be issued by xLM.
Cloud App released to Production
STEP 5
Whenever a patch or upgrade is released by the Cloud App Vendor(s), xLM will perform Compliance Analysis and a report will be issued to the customer.  Updates to validation deliverables are managed under change control.
Once this report is approved by the customer, xLM will create a test campaign (Regression Testing, New Scripts for New Feature Testing) and will execute this campaign in QA Environment.  This will be performed before the patch/upgrade is deployed in the PROD Environment.
A Release Certificate will be issued by xLM.
STEP 6
Continuous Validation:  Regression/Smoke tests are constantly run to ensure that the cloud app is in a "validated" state.
STEP 7
Customer will have 24/7 access to real-time "Cloud App Validation Health" portal.
Related Links
Why should you select xLM as the Validation Service Provider for your Cloud App?
What are the technical details of xLM Managed Validation Service?
xlmcv · 9 years ago
CONTINUOUS VALIDATION - HOW IS IT DONE? - PART 2: VALIDATION
This blog post is Part 2 in this series. What are the key steps involved in continuously validating a Cloud App? (click here for Part 1)
The key elements of a Continuous Validation Program for a Cloud App are described below. One has to bear in mind that the underlying IaaS and PaaS infrastructure is constantly changing. In fact, the Cloud App itself is continuously changing. In the new cloud world you are very rarely given the option of pinning to an "ancient" version. Thus this Continuous Validation Framework is designed to mitigate these risks and ensure that your cloud app is maintained in a validated state.
"Continuous validation is providing documented evidence to certify that a cloud app not only met the pre-established acceptance criteria, but “continuous” to meet thus mitigating the risk of unknown changes."
Requirements Definition: This step provides the foundation for the continuous validation framework. You need to clearly specify your Functional, Non-functional, Regulatory, Performance, Security, Logging, Disaster Recovery, Interface requirements.  
Risk Assessment: Perform a thorough risk assessment to ensure business continuity, regulatory compliance, etc. are considered. In my experience, a logical, useful risk assessment is rarely performed, let alone applied to testing strategies. The output of the risk assessment must be applied to your testing strategies to determine: What features to test? What platform combinations to test on? What should be the extent of negative testing? What types of testing strategies to utilize (e.g., datasets to use, N-pair testing)?
Specification Definition: Specify IaaS/PaaS, Configuration, Workflow, Interface, Security, Log Management, etc.. specifications to meet your requirements.  
Test Automation Scripts: I recommend using a Model Based Test Automation framework for developing various models to validate your cloud app. Always separate your model from data which will be utilized during runtime. You can use a data designer to generate test data including randomized data. You can also use combinatorial testing strategies to reduce the number of iterations while increasing your test effectiveness.  
Test Model Validation: The test automation model must be validated to ensure that it meets the specified objectives. This validation effort can be minimized by designing your model to generate good execution reports. A well designed model execution report provides enough evidence that, to the extent possible, the model becomes self-validating.
Test Execution: The model based test automation approach provides you with the flexibility to repurpose the same model to conduct various types of tests (smoke, regression, greedy path, optimal path, load, performance, etc.). Also, such a framework is more conducive to updates (remember your cloud app is constantly changing...so will your test automation framework!). Such a framework is well suited for continuously running validation test scripts (say on a daily basis) - you can even randomize your smoke and regression tests continuously!
Validation Reporting: A robust ALM (Application Lifecycle Management) tool is a must and forms the heart of your Continuous Validation Platform. Your goals must be to remove paper and manual generation of reports (in short: Microsoft Word will not play any role here!). Your ALM tool will provide real-time dashboards, KPIs, summary reports, test deviation reports and more. Your real-time Validation Health Dashboard becomes a reality.
xlmcv · 9 years ago
Continuous Validation - How is it done? - Part 1: Infrastructure
What do you need to implement a true continuous validation platform for a Cloud App?  This series of blog posts will answer this question.  This blog post is Part 1 in the series and covers the infrastructure required to implement this platform.
The key elements of a Continuous Validation Infrastructure for a Cloud App are outlined below.
CORE: ALM (Application Lifecycle Management) Server
Requirements Management
Specification Management
Risk Management
Traceability
Test Case Management
Test Campaign Management
Reporting
Part 11 Compliant
The ALM Server controls the Test Automation Server where test automation scripts are deployed.  The ALM Server manages the test schedules and configuration.  It also determines the test environment - for example, the OS and browser type required in the client VM that will be used to run the test automation scripts.
Based on the OS and Browser type, the test automation server automatically instantiates a Test VM which is used to validate the SUT (System Under Test - the Cloud App).
All test results are passed back to the ALM Server.
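As a hedged sketch of the kind of job the Test Automation Server might run on the instantiated Test VM, the following uses Selenium 4 Remote WebDriver; the grid URL, platform name and SUT URL are placeholders, and a real run would execute the approved test cases rather than this trivial smoke check.

```python
# Hedged sketch: open the SUT in the requested OS/browser combination on a
# remote grid and report a simple smoke result back. Names are placeholders.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

def run_smoke_test(sut_url: str, platform_name: str = "Windows 10") -> dict:
    options = Options()
    options.set_capability("platformName", platform_name)  # matches the requested Test VM
    driver = webdriver.Remote(
        command_executor="http://test-automation-grid:4444/wd/hub",  # placeholder grid
        options=options,
    )
    try:
        driver.get(sut_url)
        # A trivial smoke check; real scripts would execute the approved test cases.
        return {"sut": sut_url, "title": driver.title,
                "result": "PASS" if driver.title else "FAIL"}
    finally:
        driver.quit()
```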
Test Automation Logs and Cloud App Logs are fed into a Big Data Log Analysis App.  Detailed log analysis is performed on an ongoing basis to ensure compliance, review trends, etc..
The ALM Server provides the single source of truth for the Cloud App Validation - GxP Compliant Summary Reports are available for every release of the Cloud App.
xlmcv · 9 years ago
Continuous Validation November 2016 Newsletter
Click here for our November 2016 Newsletter.
xlmcv · 9 years ago
xLM's Service Based Accretive GxP Qualification Model for Cloud Infrastructure
Not a day goes by without a client posing this question:  Can Amazon AWS or Microsoft Azure be qualified so that I can run my apps on it?  Here is a 9 Step systematic answer to this very popular question.
Step 1:  Design your BASE Cloud Infra (combination of IaaS / PaaS)
Step 2: Divide your BASE into logical Services (Core Services (e.g. Networking, Compute) & Management Services (e.g. Auditing, Monitoring))
Step 3: Build your DEV BASE IaaS/PaaS.  Finalize and test your configuration.
Step 4: Develop a Qualification Plan / Governance Framework
Step 5: Build your QA BASE IaaS/PaaS 
Step 6: Qualify each Service (each Service at a minimum requires a Service Design Specification, a Service Qualification Protocol, and SOPs/WIs to manage Operation and Administration).
Step 7:  Deploy PROD BASE IaaS/PaaS
Step 8: Add or Modify Services to build out your Cloud under Change Control
Step 9: Build a "continuous" qualification platform for each Service to provide documented evidence that your Service continues to work as designed (irrespective of the many changes pushed by the Cloud Vendor).
xlmcv · 9 years ago
How to "Validation Enable" the Cloud?
How to "Validation Enable" the Cloud? Life science companies are adopting the Cloud (Public, Private, Hybrid) but are constantly faced with the "Validation Challenge": How can I validate the Cloud Stack (IaaS, PaaS, SaaS) to meet the various GxPs?  Cloud is designed to be flexible, scalable, constantly upgrading and remove the onus of managing hardware from the end user.  Software Defined Infrastructure is meant to be this way.  As expected, our regulations are not "Software Defined" but stuck in the old "hard" ways.
Does this mean that a Cloud App cannot be qualified?  Does this mean that all Life science companies still have to buy servers, tag the cables and own the data centers?  Absolutely not!  They have to change their validation toolset.  You will need a validation approach that is "Software Defined" and not "Hard Copy Defined".  If any company tries to validate a Cloud App using their rusty, paper-based manual processes, they are bound to fail, for the very simple reason that the Cloud is ever changing and meant to be that way.

We call this "Software Defined Validation" approach "Continuous Validation".  Now let us define "Continuous Validation": "Continuous validation is providing documented evidence to certify that a cloud app not only met the pre-established acceptance criteria, but “continues” to meet it, thus mitigating the risk of unknown changes."
The features of a continuous validation platform are described below.
Continuous validation is not just a "point in time" validation.  It is a type of validation which connects various points in time (initial, patch, upgrade validation) with continuous smoke and regression testing.  This feature provides the documented evidence that the Cloud App worked well not just at discrete points in time in the past, but continues to function as expected in the present.  This mitigates the risk of any change in either the IaaS or PaaS layer that can potentially alter its behavior.   Continuous Validation needs a new toolset that has no paper and no manual testing.  Such a toolset consists of a 21 CFR Part 11 compliant, highly sophisticated Application Lifecycle Management (ALM) combined with a robust test automation framework.  Of course, a QMS framework to manage this toolset is an absolute must.
xlmcv · 9 years ago
Test Data Generation in Test Automation
Test data generation is an integral aspect of test bed setup and can make or break your testing strategy.  It must be an integral part of Test Automation; otherwise you risk losing the comprehensiveness and validity of your testing process.
Test Data Generation
One of the major challenges in Test Automation is comprehensive test data generation.  It could be an expensive, lengthy and complex affair with high maintenance cost to keep it up to date with each release.
"Not enough test data? Test data are incomplete, old or unavailable? Missing important test scenarios? All these are problems of the past, provided you have chosen the right tools and approaches for designing your Test Automation Framework."
A reliable and efficient approach is to have an inbuilt engine for test data generation that can be plugged into your Test Automation jobs for continuous operation in a continuously validated environment.
Test Data Quality
A good set of application test data consists of values that can be valid, invalid, complete, incomplete, null, illegal or out of range.  A library of inbuilt functions that can generate such values should be part of a robust Test Automation framework.
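As a small illustration of the idea (not any specific framework's built-in library), such a function library might start from a generator like this for a numeric field with a known valid range:

```python
# Illustrative sketch: generate valid, boundary, out-of-range, invalid and
# null values for a numeric field.
import random

def field_test_values(min_value: float, max_value: float) -> dict:
    return {
        "valid": random.uniform(min_value, max_value),
        "boundary": [min_value, max_value],
        "out_of_range": [min_value - 1, max_value + 1],
        "invalid_type": "not-a-number",
        "null": None,
    }

# Example: field_test_values(0, 100) for a percentage field.
```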
Combinatorial Test Data
From two-field pairwise to multi-field full combinatorial test data generation, this is a tricky job in the traditional world, not only because of volume limits but also because of finding combinations that satisfy the proper conditions. Using dynamic data generation with rule-based test data filtering is the most efficient way to go. If you have an inbuilt engine to do just this, it can transform the effectiveness of your Test Automation rig.
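A minimal sketch of dynamic combinatorial generation with rule-based filtering using Python's itertools; the field names and the rule are illustrative only:

```python
# Illustrative sketch: build the cross-product of field values and drop
# combinations that violate a business rule before they reach the tests.
from itertools import product

fields = {
    "country": ["US", "DE", "IN"],
    "user_role": ["admin", "approver", "viewer"],
    "record_state": ["draft", "approved", "archived"],
}

def allowed(combo: dict) -> bool:
    # Example rule: viewers are never paired with draft records.
    return not (combo["user_role"] == "viewer" and combo["record_state"] == "draft")

test_data = [
    dict(zip(fields, values))
    for values in product(*fields.values())
    if allowed(dict(zip(fields, values)))
]
# Pairwise (N-pair) strategies would reduce this set further while preserving coverage.
```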
Dynamic and Random
Random generation of field values can be an effective approach to uncover blind spots in application functionality. The use of dynamic variables to randomize test data helps ensure that anomalies in the application are uncovered.